People usually compose music by organizing elements according to musical form to express their musical ideas. However, for neural-network-based music generation, it is difficult to do so due to the lack of labeled data on musical form. In this paper, we develop MeloForm, a system that generates melodies with musical form using expert systems and neural networks. Specifically, 1) we design an expert system to generate a melody by developing musical elements from motifs to phrases and then to sections, with repetition and variation according to a pre-given musical form; 2) considering that the generated melody may lack musical richness, we design a Transformer-based refinement model to improve the melody without changing its musical form. MeloForm enjoys the advantages of both precise musical-form control from the expert system and musical richness learned by the neural model. Both subjective and objective experimental evaluations demonstrate that MeloForm generates melodies with precise musical-form control at 97.79% accuracy, and outperforms baseline systems in subjective evaluation scores by 0.75, 0.50, 0.86, and 0.89 in structure, theme, richness, and overall quality, respectively, without any labeled musical-form data. Moreover, MeloForm can support various forms, such as verse-and-chorus form, rondo form, variational form, sonata form, and so on.
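As a concrete illustration of the expert-system stage, the following minimal Python sketch develops a motif into phrases and assembles sections according to a pre-given form string such as "AABA". All function names and the toy transposition-based variation operator are our own illustrative assumptions, not MeloForm's actual rewriting rules.

```python
import random

def make_motif(length=4, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Sample a short motif as scale degrees (purely illustrative)."""
    return [random.choice(scale) for _ in range(length)]

def vary(phrase, shift=2):
    """A toy variation operator: transpose the phrase by a fixed interval."""
    return [(p + shift) % 12 for p in phrase]

def develop_phrase(motif, repeats=2):
    """Develop a motif into a phrase via repetition plus one variation."""
    return motif * repeats + vary(motif)

def expand_form(form="AABA"):
    """Assemble sections: repeated letters reuse material, new letters get new material."""
    sections, material = [], {}
    for letter in form:
        if letter not in material:
            material[letter] = develop_phrase(make_motif())
        sections.append(list(material[letter]))
    return sections

print(expand_form("AABA"))  # the two leading sections repeat exactly
```

In the full system, a neural refinement model would then rewrite such a skeleton for richness while preserving the section structure.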
Recent works on 2D-to-3D human pose estimation tend to exploit the graph structure formed by the topology of the human skeleton. However, we argue that this skeletal topology is too sparse to reflect the body structure, and it suffers from serious 2D-to-3D ambiguity. To overcome these weaknesses, we propose a novel graph convolutional network architecture, the Hierarchical Graph Network (HGN). It is based on denser graph topologies generated by our multi-scale graph structure building strategy, and thus provides finer geometric information. The proposed architecture contains three sparse-to-fine representation subnetworks organized in parallel, in which features over the multi-scale graph structures are processed and exchanged through a novel feature fusion strategy, leading to rich hierarchical representations. We also introduce a coarse 3D mesh constraint to further boost detail-related feature learning. Extensive experiments demonstrate that our HGN achieves state-of-the-art performance with fewer network parameters.
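A hedged sketch of the underlying computation, assuming graph convolutions over a shared joint set with adjacency matrices of increasing edge density; the three-branch layout and concatenation-based fusion below are illustrative simplifications, not the authors' implementation.

```python
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One graph convolution over a fixed (normalized) adjacency matrix."""
    def __init__(self, in_dim, out_dim, adj):
        super().__init__()
        self.register_buffer("adj", adj)           # (J, J), J = number of joints
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):                           # x: (B, J, in_dim)
        return torch.relu(self.adj @ self.linear(x))

class MultiScaleGCN(nn.Module):
    """Parallel branches over sparse-to-dense graphs, fused by concatenation."""
    def __init__(self, dims, adjs):
        super().__init__()
        self.branches = nn.ModuleList(
            GraphConv(2, d, a) for d, a in zip(dims, adjs))
        self.head = nn.Linear(sum(dims), 3)          # regress 3D joint coordinates

    def forward(self, pose2d):                       # pose2d: (B, J, 2)
        feats = [branch(pose2d) for branch in self.branches]
        return self.head(torch.cat(feats, dim=-1))

# Smoke test with 17 COCO-style joints and three dummy adjacencies.
adjs = [torch.eye(17) for _ in range(3)]
model = MultiScaleGCN([32, 64, 128], adjs)
print(model(torch.randn(2, 17, 2)).shape)            # torch.Size([2, 17, 3])
```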
Small lesions in magnetic resonance imaging (MRI) images are crucial for the clinical diagnosis of many diseases. However, MRI quality is easily degraded by various kinds of noise, which can greatly affect the diagnostic accuracy for small lesions. Although some methods for denoising MR images have been proposed, task-specific denoising methods that improve diagnostic confidence for small lesions are lacking. In this work, we propose a voxel-wise hybrid residual MLP-CNN model to denoise three-dimensional (3D) MR images containing small lesions. We combine the basic deep-learning architectures MLP and CNN to obtain suitable inductive biases for image denoising, and integrate each output layer of the MLP and CNN by adding residual connections to exploit long-range information. We evaluated the proposed method on 720 T2-FLAIR brain images with small lesions at different noise levels. The results show that our method outperforms state-of-the-art methods on the test dataset in both quantitative and visual evaluations. Moreover, two experienced radiologists agreed that, at moderate and high noise levels, our method outperforms other methods in recovering small lesions and in overall image quality. The implementation of our method is available at https://github.com/laowangbobo/Residual_MLP_CNN_MIXER.
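The following is a rough sketch of the hybrid idea, not the released code at the link above: a 3D-convolution branch contributes a local inductive bias, a position-mixing MLP branch contributes long-range context, and residual connections fuse the two. We assume fixed-size input patches so the MLP has a fixed width.

```python
import torch
import torch.nn as nn

class HybridBlock(nn.Module):
    """Residual block mixing a local CNN path and a global MLP path."""
    def __init__(self, channels, n_voxels):
        super().__init__()
        self.conv = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.mlp = nn.Sequential(                   # mixes across spatial positions
            nn.Linear(n_voxels, n_voxels), nn.GELU(),
            nn.Linear(n_voxels, n_voxels))

    def forward(self, x):                           # x: (B, C, D, H, W)
        b, c, d, h, w = x.shape
        local = self.conv(x)
        global_ = self.mlp(x.reshape(b, c, -1)).reshape(b, c, d, h, w)
        return x + local + global_                  # residual fusion of both paths

class Denoiser(nn.Module):
    def __init__(self, blocks=4, channels=8, n_voxels=16 ** 3):
        super().__init__()
        self.stem = nn.Conv3d(1, channels, 3, padding=1)
        self.body = nn.Sequential(
            *[HybridBlock(channels, n_voxels) for _ in range(blocks)])
        self.head = nn.Conv3d(channels, 1, 3, padding=1)

    def forward(self, noisy):                       # predict and subtract the noise
        return noisy - self.head(self.body(self.stem(noisy)))

print(Denoiser()(torch.randn(1, 1, 16, 16, 16)).shape)  # (1, 1, 16, 16, 16)
```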
Information-directed sampling (IDS) has revealed its potential as a data-efficient algorithm for reinforcement learning (RL). However, theoretical understanding of IDS for Markov decision processes (MDPs) is still limited. We develop novel information-theoretic tools to bound the information ratio and the cumulative information gain with respect to the learning target. Our theoretical results shed light on the importance of choosing the learning target, so that practitioners can balance computational cost against regret bounds. As a consequence, we derive prior-free Bayesian regret bounds for vanilla IDS, which learns the whole environment, under tabular finite-horizon MDPs. In addition, we propose a computationally efficient regularized IDS that maximizes an additive form rather than the ratio form, and show that it enjoys the same regret bound as vanilla IDS. With the aid of rate-distortion theory, we improve the regret bound by learning a surrogate, less informative environment. Furthermore, we extend our analysis to linear MDPs and prove similar regret bounds for Thompson sampling as a by-product.
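Schematically, in our own notation rather than the paper's, the two objectives compare as follows, where $\Delta_t$ is the expected instantaneous regret of a policy $\pi$, $\mathcal{I}_t$ its information gain about the learning target, and $\lambda > 0$ a tuning parameter:

```latex
% Vanilla IDS: minimize the information ratio
\pi_t^{\mathrm{IDS}} \in \arg\min_{\pi}
  \frac{\bigl(\mathbb{E}_{\pi}[\Delta_t]\bigr)^2}{\mathbb{E}_{\pi}[\mathcal{I}_t]}

% Regularized IDS: optimize an additive trade-off instead of the ratio
\pi_t^{\mathrm{reg}} \in \arg\min_{\pi}
  \mathbb{E}_{\pi}[\Delta_t] - \lambda\, \mathbb{E}_{\pi}[\mathcal{I}_t]
```

The additive form sidesteps optimizing a ratio over the policy simplex, which is what makes the regularized variant computationally cheaper.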
Information-directed sampling (IDS) has recently demonstrated its potential as a data-efficient reinforcement learning algorithm. However, it remains unclear what the right form of the information ratio to optimize is when contextual information is available. We investigate the IDS design through two contextual bandit problems: contextual bandits with graph feedback and sparse linear contextual bandits. We provably demonstrate the advantage of contextual IDS over conditional IDS and emphasize the importance of taking the context distribution into account. The main message is that an intelligent agent should invest more in actions that are beneficial for future unseen contexts, whereas conditional IDS can be myopic. We further propose a computationally efficient, actor-critic-based version of contextual IDS and evaluate it empirically on a neural-network contextual bandit.
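To make the conditional-versus-contextual distinction concrete (again in schematic notation of our own), conditional IDS forms an information ratio separately for each observed context $s$, while contextual IDS averages over the context distribution $\mu$ before forming the ratio:

```latex
% Conditional IDS: one ratio per observed context s
\pi^{\mathrm{cond}}(\cdot \mid s) \in \arg\min_{\pi}
  \frac{\bigl(\mathbb{E}_{\pi}[\Delta \mid s]\bigr)^2}{\mathbb{E}_{\pi}[\mathcal{I} \mid s]}

% Contextual IDS: average over the context distribution first
\pi^{\mathrm{ctx}} \in \arg\min_{\pi}
  \frac{\bigl(\mathbb{E}_{s \sim \mu}\,\mathbb{E}_{\pi}[\Delta \mid s]\bigr)^2}
       {\mathbb{E}_{s \sim \mu}\,\mathbb{E}_{\pi}[\mathcal{I} \mid s]}
```

Averaging before forming the ratio is what lets the agent pay regret in one context for information that pays off in others, which the per-context ratio cannot express.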
The current popular two-stream, two-stage tracking framework extracts the template and the search region features separately and then performs relation modeling, thus the extracted features lack the awareness of the target and have limited target-background discriminability. To tackle the above issue, we propose a novel one-stream tracking (OSTrack) framework that unifies feature learning and relation modeling by bridging the template-search image pairs with bidirectional information flows. In this way, discriminative target-oriented features can be dynamically extracted by mutual guidance. Since no extra heavy relation modeling module is needed and the implementation is highly parallelized, the proposed tracker runs at a fast speed. To further improve the inference efficiency, an in-network candidate early elimination module is proposed based on the strong similarity prior calculated in the one-stream framework. As a unified framework, OSTrack achieves state-of-the-art performance on multiple benchmarks, in particular, it shows impressive results on the one-shot tracking benchmark GOT-10k, i.e., achieving 73.7% AO, improving the existing best result (SwinTrack) by 4.3%. Besides, our method maintains a good performance-speed trade-off and shows faster convergence. The code and models are available at https://github.com/botaoye/OSTrack.
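A simplified sketch (our reading, not the official OSTrack code) of the candidate early elimination step: attention weights from a template token to the search-region tokens serve as a free similarity prior, so low-scoring tokens can be discarded between encoder layers.

```python
import torch

def eliminate_candidates(search_tokens, attn, keep_ratio=0.7):
    """
    search_tokens: (B, N, C) search-region tokens after an encoder layer
    attn:          (B, N)    attention from a template token to search tokens
    Returns the kept tokens and their original indices.
    """
    b, n, c = search_tokens.shape
    k = max(1, int(n * keep_ratio))
    topk = attn.topk(k, dim=1).indices                        # (B, k)
    kept = torch.gather(search_tokens, 1,
                        topk.unsqueeze(-1).expand(b, k, c))   # (B, k, C)
    return kept, topk
```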
Employing unmanned aerial vehicles (UAVs) has attracted growing interest and become the state-of-the-art technology for data collection in Internet of Things (IoT) networks. In this paper, aiming to minimize the total energy consumption of the UAV-IoT system, we formulate the problem of jointly designing the UAV's trajectory and selecting cluster heads in the IoT network as a constrained combinatorial optimization problem, which is NP-hard and challenging to solve. We propose a novel deep reinforcement learning (DRL) approach with a sequential model strategy that can effectively learn, in an unsupervised manner, a policy represented by a sequence-to-sequence neural network for the UAV's trajectory design. Extensive simulations show that, compared with other baseline algorithms, the proposed DRL method finds UAV trajectories that require much less energy and achieve near-optimal performance. Furthermore, the simulation results show that the model trained by our DRL algorithm has excellent generalization ability to larger problem sizes, without the need to retrain the model.
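A schematic sketch of such a sequential policy, with all module names our own: an attention "pointer" over node embeddings scores the remaining candidate cluster heads and the UAV visits them greedily. In the paper the policy is trained with DRL; here we only show greedy decoding.

```python
import torch
import torch.nn as nn

class PointerPolicy(nn.Module):
    """Greedy pointer-style decoder over IoT cluster-head candidates."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Linear(2, dim)           # embed 2-D node coordinates
        self.query = nn.Linear(dim, dim)

    def forward(self, coords):                   # coords: (N, 2)
        nodes = self.embed(coords)               # (N, dim)
        visited = torch.zeros(len(coords), dtype=torch.bool)
        state, order = nodes.mean(0), []
        for _ in range(len(coords)):
            scores = nodes @ self.query(state)   # attention scores, (N,)
            scores[visited] = float("-inf")      # mask already-visited nodes
            nxt = int(scores.argmax())
            visited[nxt] = True
            order.append(nxt)
            state = nodes[nxt]                   # condition on the last visit
        return order                             # visiting order of cluster heads

print(PointerPolicy()(torch.rand(8, 2)))         # e.g. [3, 5, 0, ...]
```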
Unsupervised domain adaptation (UDA) for semantic segmentation is a promising task freeing people from heavy annotation work. However, domain discrepancies in low-level image statistics and high-level contexts compromise the segmentation performance over the target domain. A key idea to tackle this problem is to perform both image-level and feature-level adaptation jointly. Unfortunately, there is a lack of such unified approaches for UDA tasks in the existing literature. This paper proposes a novel UDA pipeline for semantic segmentation that unifies image-level and feature-level adaptation. Concretely, for image-level domain shifts, we propose a global photometric alignment module and a global texture alignment module that align images in the source and target domains in terms of image-level properties. For feature-level domain shifts, we perform global manifold alignment by projecting pixel features from both domains onto the feature manifold of the source domain; and we further regularize category centers in the source domain through a category-oriented triplet loss and perform target domain consistency regularization over augmented target domain images. Experimental results demonstrate that our pipeline significantly outperforms previous methods. In the commonly tested GTA5→Cityscapes task, our proposed method using Deeplab V3+ as the backbone surpasses previous SOTA by 8%, achieving 58.2% in mIoU.
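As an illustration of the category-oriented triplet loss, here is a hedged sketch in an assumed form (not necessarily the paper's exact margin or sampling scheme): each source-domain pixel feature is pulled toward its own class center and pushed away from the nearest other-class center.

```python
import torch
import torch.nn.functional as F

def category_triplet_loss(feats, labels, centers, margin=0.5):
    """
    feats:   (N, C) pixel features sampled from the source domain
    labels:  (N,)   their class indices
    centers: (K, C) running class centers
    """
    pos = F.pairwise_distance(feats, centers[labels])          # to own center
    dist = torch.cdist(feats, centers)                         # (N, K)
    dist = dist.scatter(1, labels.unsqueeze(1), float("inf"))  # mask own class
    neg = dist.min(dim=1).values                               # nearest wrong center
    return F.relu(pos - neg + margin).mean()
```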
Different people speak with diverse personalized speaking styles. Although existing one-shot talking head methods have made significant progress in lip sync, natural facial expressions, and stable head motions, they still cannot generate diverse speaking styles in the final talking head videos. To tackle this problem, we propose a one-shot style-controllable talking face generation framework. In a nutshell, we aim to attain a speaking style from an arbitrary reference speaking video and then drive the one-shot portrait to speak with the reference speaking style and another piece of audio. Specifically, we first develop a style encoder to extract dynamic facial motion patterns of a style reference video and then encode them into a style code. Afterward, we introduce a style-controllable decoder to synthesize stylized facial animations from the speech content and style code. In order to integrate the reference speaking style into generated videos, we design a style-aware adaptive transformer, which enables the encoded style code to adjust the weights of the feed-forward layers accordingly. Thanks to the style-aware adaptation mechanism, the reference speaking style can be better embedded into synthesized videos during decoding. Extensive experiments demonstrate that our method is capable of generating talking head videos with diverse speaking styles from only one portrait image and an audio clip while achieving authentic visual effects. Project Page: https://github.com/FuxiVirtualHuman/styletalk.
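A minimal sketch of the style-aware adaptation mechanism as we understand it (the module name and gating form are our assumptions, not the released code): the style code predicts per-channel scales that reweight the hidden units of a transformer feed-forward layer.

```python
import torch
import torch.nn as nn

class StyleAdaptiveFeedForward(nn.Module):
    """Feed-forward layer whose hidden units are modulated by a style code."""
    def __init__(self, dim, hidden, style_dim):
        super().__init__()
        self.fc1 = nn.Linear(dim, hidden)
        self.fc2 = nn.Linear(hidden, dim)
        self.to_scale = nn.Linear(style_dim, hidden)  # style code -> channel scales

    def forward(self, x, style):            # x: (B, T, dim); style: (B, style_dim)
        scale = torch.sigmoid(self.to_scale(style))       # (B, hidden), in (0, 1)
        h = torch.relu(self.fc1(x)) * scale.unsqueeze(1)  # reweight hidden units
        return x + self.fc2(h)                            # residual connection
```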
Witnessing the impressive achievements of pre-training techniques on large-scale data in the field of computer vision and natural language processing, we wonder whether this idea could be adapted in a grab-and-go spirit, and mitigate the sample inefficiency problem for visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive irrelevant information for decision making, resulting in predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for the policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns driving policy representation by predicting the future ego-motion and optimizing with the photometric error based on current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving policy related representations and thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios have demonstrated the superiority of our proposed approach, where improvements range from 2% to even over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
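A condensed sketch of the standard monocular view-synthesis objective we assume PPGeo builds on: with depth predicted for the current frame and ego-motion predicted by the visual encoder, a neighboring frame is inversely warped into the current view and supervised with a photometric error (the SSIM term and auto-masking commonly used in practice are omitted for brevity).

```python
import torch
import torch.nn.functional as F

def inverse_warp(src, depth, pose, K):
    """
    src:   (B, 3, H, W) source image
    depth: (B, 1, H, W) predicted depth of the target view
    pose:  (B, 3, 4)    target-to-source rigid transform [R | t]
    K:     (B, 3, 3)    camera intrinsics
    """
    b, _, h, w = src.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()  # (3, H, W)
    pix = pix.reshape(1, 3, -1).expand(b, -1, -1)                    # (B, 3, H*W)
    cam = torch.linalg.inv(K) @ pix * depth.reshape(b, 1, -1)        # back-project
    cam = pose[:, :, :3] @ cam + pose[:, :, 3:]                      # move to source frame
    proj = K @ cam
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)                   # pixel coordinates
    gx = uv[:, 0] / (w - 1) * 2 - 1                                  # normalize to [-1, 1]
    gy = uv[:, 1] / (h - 1) * 2 - 1
    grid = torch.stack([gx, gy], dim=-1).reshape(b, h, w, 2)
    return F.grid_sample(src, grid, align_corners=True)

def photometric_loss(target, src, depth, pose, K):
    """L1 photometric error between the target frame and the warped source."""
    return (target - inverse_warp(src, depth, pose, K)).abs().mean()
```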